Low rank approximation of positive semi-definite symmetric matrices using Gaussian elimination and volume sampling
Authors
Abstract
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense, so algorithms that solve systems with such a matrix can be very costly. A core idea for reducing the computational complexity is to approximate the matrix by one of low rank. The optimal and well-understood choice is based on the eigenvalue decomposition of the matrix; unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination, but they require pivoting. We show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials of the eigenvalues. We discuss several bounds for the expected norm of the approximation error and include examples where the expected error can be computed exactly.
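The abstract contrasts the optimal eigenvalue-based low-rank approximation with cheaper Gaussian-elimination-based methods that require pivoting. A minimal numpy sketch of that contrast, using diagonal-pivoted partial Cholesky as an illustrative stand-in (this is an assumption for illustration, not the paper's volume-sampling method), compared against the Eckart–Young optimum:

```python
import numpy as np

def pivoted_cholesky(A, k):
    """Greedy diagonal-pivoted partial Cholesky: returns L (n x k) with A ~ L @ L.T."""
    n = A.shape[0]
    L = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()  # residual diagonal
    for j in range(k):
        p = np.argmax(d)                 # pivot on largest residual diagonal entry
        L[:, j] = (A[:, p] - L[:, :j] @ L[p, :j]) / np.sqrt(d[p])
        d -= L[:, j] ** 2
        d = np.maximum(d, 0.0)           # guard against round-off
    return L

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8))
A = X @ X.T                              # dense PSD test matrix
k = 3
L = pivoted_cholesky(A, k)
err_chol = np.linalg.norm(A - L @ L.T, "fro")

w = np.linalg.eigvalsh(A)                # ascending eigenvalues
err_opt = np.sqrt(np.sum(w[:-k] ** 2))   # optimal rank-k Frobenius error (Eckart-Young)
# err_opt <= err_chol always; the pivoted sweep costs O(nk^2) instead of O(n^3)
```

The greedy pivot rule here is one common heuristic; the paper analyses instead the error obtained on average when the pivots are drawn by volume sampling.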
Similar resources
Learning general Gaussian kernel hyperparameters of SVMs using optimization on symmetric positive-definite matrices manifold
We propose a new method for general Gaussian kernel hyperparameter optimization for support vector machine classification. The hyperparameters are constrained to lie on a differentiable manifold. The proposed optimization technique is based on a gradient-like descent algorithm adapted to the geometrical structure of the manifold of symmetric positive-definite matrices. We compare the performan...
Product of three positive semi-definite matrices
In [2], the author showed that a square matrix with nonnegative determinant can always be written as the product of five or fewer positive semi-definite matrices. This is an extension of the result in [1] asserting that every matrix with positive determinant is the product of five or fewer positive definite matrices. Analogous to the analysis in [1], the author of [2] studied those matrices whi...
DDtBe for Band Symmetric Positive Definite Matrices
We present a new parallel factorization for band symmetric positive definite (s.p.d.) matrices and show some of its applications. Let A be a band s.p.d. matrix of order n and half bandwidth m. We show how to factor A as A = DDtBe using approximately 4nm²/p parallel operations, where p is the number of processors. Having this factorization, we improve the time to solve Ax = b by a factor of m...
A Sparse Decomposition of Low Rank Symmetric Positive Semidefinite Matrices
Suppose that A ∈ R^(N×N) is symmetric positive semidefinite with rank K ≤ N. Our goal is to decompose A into K rank-one matrices, A = ∑_{k=1}^K g_k g_k^T, where the modes {g_k}_{k=1}^K are required to be as sparse as possible. In contrast to eigen decomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization where A is the covariance function and is ...
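For contrast with the sparse, non-orthogonal modes sought in that abstract, the classical eigen decomposition already writes a rank-K positive semidefinite matrix as a sum of K rank-one terms g_k g_k^T with orthogonal modes. A minimal numpy sketch of that baseline (not the sparse method of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 3))
A = G @ G.T                       # PSD matrix with N = 6, rank K = 3
K = 3

w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
# keep the K nonzero eigenpairs; modes g_k = sqrt(lambda_k) * v_k
gs = [np.sqrt(w[-i]) * V[:, -i] for i in range(1, K + 1)]

B = sum(np.outer(g, g) for g in gs)  # reconstruct A = sum_k g_k g_k^T
```

Here the modes are mutually orthogonal by construction; the paper's point is that dropping orthogonality leaves room to make the g_k sparse instead.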
متن کاملKernel Density Estimation on Spaces of Gaussian Distributions and Symmetric Positive Definite Matrices
This paper analyses the kernel density estimation on spaces of Gaussian distributions endowed with different metrics. Explicit expressions of kernels are provided for the case of the 2-Wasserstein metric on multivariate Gaussian distributions and for the Fisher metric on multivariate centred distributions. Under the Fisher metric, the space of multivariate centred Gaussian distributions is isom...
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
Journal
Journal title: Australian & New Zealand Industrial and Applied Mathematics Journal
Year: 2021
ISSN: 1445-8810
DOI: https://doi.org/10.21914/anziamj.v62.16036